Upgrade to Next.js 16 and Gemini 3.1 Pro Enhancements#593
ngoiyaeric wants to merge 1 commit into `main`
Conversation
Dev seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Already signed the CLA but the status is still pending? Let us recheck it.
**Walkthrough**

This PR upgrades to Next.js 16 with React 19.2.0, enables Turbopack and the React Compiler, switches to Gemini 3.1 Pro with custom-tools support via conditional model selection, adds in-memory context caching for conversation state, and enhances agent reasoning prompts with step-by-step guidance and multi-step analysis workflows.

**Changes:** Next.js 16 & Gemini 3.1 Pro Integration Upgrade
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant Client
    participant Agent as Agent<br/>(researcher)
    participant CacheUtil as Context<br/>Cache
    participant ModelUtil as getModel()
    participant LLM as Gemini 3.1 Pro
    Client->>Agent: Messages + System Prompt
    Agent->>CacheUtil: cacheConversationContext(...)
    CacheUtil->>CacheUtil: Serialize first ~80% of messages,<br/>store with TTL & timestamp
    activate CacheUtil
    Note over CacheUtil: In-memory Map keyed by<br/>conversationId + messageCount
    deactivate CacheUtil
    Agent->>ModelUtil: getModel(hasImage, true)
    Note over ModelUtil: usesTools=true selects<br/>gemini-3.1-pro-preview-customtools
    ModelUtil-->>Agent: Custom-tools model variant
    Agent->>LLM: Stream with enhanced reasoning prompt<br/>+ tools + cached context
    LLM-->>Agent: Reasoning steps + tool calls
    Agent->>CacheUtil: clearConversationCache(...)<br/>(on completion/expiry)
    CacheUtil->>CacheUtil: Remove cached context
    Agent-->>Client: Final result
```
**Estimated code review effort:** 🎯 3 (Moderate) | ⏱️ ~25 minutes
> **Warning:** Review ran into problems.
> 🔥 Git: Failed to clone repository.
**Review Summary by Qodo**

Upgrade to Next.js 16, React 19.2, and enhance Gemini 3.1 Pro with context caching.
**Description**

- Upgrade Next.js to v16.2 and React to v19.2 with React Compiler enabled
- Add context caching utility for Gemini 3.1 Pro to reduce token usage
- Integrate custom tools endpoint for improved agent tool-calling reliability
- Enhance system prompts with advanced reasoning approach for geospatial analysis
- Update configuration for Next.js 16 standards (turbopack, cacheComponents, proxy)

**Diagram**

```mermaid
flowchart LR
    A["Next.js 16.2<br/>React 19.2"] --> B["React Compiler<br/>Enabled"]
    A --> C["Turbopack<br/>Stable"]
    D["Gemini 3.1 Pro"] --> E["Custom Tools<br/>Endpoint"]
    D --> F["Context Caching<br/>Utility"]
    E --> G["Improved Agent<br/>Tool Calling"]
    F --> H["Reduced Token<br/>Usage"]
    B --> I["Auto Component<br/>Optimization"]
    G --> J["Enhanced Geospatial<br/>Analysis"]
    H --> J
```
**File Changes**

1. `lib/utils/context-cache.ts`
**Code Review by Qodo**

**1. Middleware never runs**

```diff
 import type { NextRequest } from "next/server"

-export function middleware(request: NextRequest) {
+export function proxy(request: NextRequest) {
```
1. Middleware never runs 🐞 Bug ≡ Correctness
proxy.ts exports proxy() instead of exporting middleware() from a middleware.ts entrypoint, so Next.js will not execute this code as middleware. Any request-handling logic in this file (server-action bypass, redirects, future auth) becomes a no-op at runtime.
Agent Prompt
## Issue description
`proxy.ts` will not be executed as Next.js middleware. Next.js middleware must live in `middleware.ts` (or `middleware.js`) and export a function named `middleware` (or a default export).
## Issue Context
This PR appears to have renamed middleware to align with a perceived Next.js 16 change, but Next.js still uses the `middleware.*` convention. As a result, the redirect/auth logic is silently disabled.
## Fix Focus Areas
- proxy.ts[1-24]
## Suggested fix
- Rename `proxy.ts` back to `middleware.ts`.
- Rename `export function proxy` back to `export function middleware` (or `export default function middleware`).
- If you want to keep a `proxy.ts` helper, keep the logic there but re-export it from `middleware.ts`:
- `import { proxy } from './proxy'`
- `export function middleware(req) { return proxy(req) }`
- Ensure the `config.matcher` remains exported from the middleware entrypoint.
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
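The re-export variant of the suggested fix can be sketched as follows. This is a hypothetical `middleware.ts`; simplified stand-in types replace the real `next/server` imports so the snippet is self-contained, and `proxy()` here is only a pass-through placeholder for the actual helper in `proxy.ts`.

```typescript
// Stand-in types; the real file would use NextRequest/NextResponse from 'next/server'.
type SimpleRequest = { url: string }
type SimpleResponse = { action: 'next' | 'redirect'; location?: string }

// Placeholder for the logic kept in proxy.ts (assumption: a pass-through by default).
export function proxy(request: SimpleRequest): SimpleResponse {
  return { action: 'next' }
}

// Next.js only executes a `middleware` export from middleware.ts, so delegate to it.
export function middleware(request: SimpleRequest): SimpleResponse {
  return proxy(request)
}

// The matcher must also be exported from the middleware entrypoint.
// The pattern below is illustrative, not copied from the PR.
export const config = { matcher: '/((?!_next/static|favicon.ico).*)' }
```

This keeps the `proxy.ts` helper intact while restoring the entrypoint name that the framework actually looks for.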
```diff
 "babel-plugin-react-compiler": "^1.0.0",
 "cross-env": "^7.0.3",
 "eslint": "^8.57.1",
-"eslint-config-next": "^14.2.28",
+"eslint-config-next": "^16.x",
 "postcss": "^8.5.3",
```
**2. ESLint/Next peer mismatch** 🐞 Bug ☼ Reliability
eslint-config-next is upgraded to 16.x while eslint remains on 8.x, but the resolved eslint-config-next@16.2.5 declares a peer dependency of eslint >=9.0.0. This can cause dependency resolution failures or forced peer overrides depending on your npm/CI settings.
Agent Prompt
## Issue description
`eslint-config-next@16.x` requires `eslint >=9`, but the repo is pinned to `eslint ^8.57.1`. This may break installs (ERESOLVE) or require disabling peer checks.
## Issue Context
The lockfile already resolves `eslint-config-next` to 16.2.5 and declares the peer requirement, so the mismatch is present in the committed dependency set.
## Fix Focus Areas
- package.json[112-118]
- package-lock.json[11713-11733]
## Suggested fix
Choose one:
1) Upgrade eslint to a compatible major:
- Set `"eslint": "^9.x"` (and adjust any lint config changes needed).
2) Keep eslint v8:
- Downgrade/pin `eslint-config-next` to a version that supports eslint v8.
Then regenerate `package-lock.json` to reflect the consistent set.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
`lib/utils/context-cache.ts` (1)

**1-115:** ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

**Context cache is never imported or used, and doesn't actually reduce Gemini token costs.**

Two distinct concerns:

1. **Dead code:** No file in this PR (`researcher.tsx`, `resolution-search.tsx`, `index.ts`) imports or calls any export from `context-cache.ts`. The utility is fully unreachable at runtime.
2. **Incorrect abstraction:** Even if integrated, this implementation stores a local in-memory text copy of messages (`cachedPrefix`) but still passes the full `messages` array to the model on every call. To actually reduce Gemini API token costs, you must use the Gemini native context caching API: upload a stable context prefix to get a server-side `cachedContent` ID, then pass that ID in subsequent requests. A local `Map` cannot intercept or reduce tokens billed by the API.

Consider either integrating the `getCachedContext`/`cacheConversationContext` calls into `researcher.tsx` and `resolution-search.tsx`, or replacing this with a Gemini-native context cache client that holds a `cachedContent` name/ID and injects it into the `createGoogleGenerativeAI` model call.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@lib/utils/context-cache.ts` around lines 1-115: the context-cache.ts file is dead and its in-memory `cachedPrefix` won't reduce Gemini token billing; either remove the module or replace/integrate it with Gemini native caching: stop using `contextCache`/`cachedPrefix` and instead call the Gemini caching API to upload a stable prefix and store a returned `cachedContent` ID (use `generateCacheKey`, `cacheConversationContext`, `getCachedContext`, `clearConversationCache` as the integration points), then change callers (e.g., `researcher.tsx` and `resolution-search.tsx`) to pass the `cachedContent` ID to `createGoogleGenerativeAI` instead of the full `messages` array; if you prefer to keep a local helper, implement `cacheConversationContext` to call the Gemini upload endpoint and save the `cachedContent` ID in `contextCache` (with `CACHE_TTL`/`MAX_CACHE_SIZE` semantics) and have `getCachedContext` return that ID for injection into model requests.

`lib/agents/resolution-search.tsx` (1)
**137-175:** ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

**Model-specific branding in the system prompt will mislead non-Gemini providers.**

Line 137 hardcodes `"(Gemini 3.1 Pro)"` as the model's identity, and line 168 embeds `"(ARC-AGI-2 Level)"` benchmark marketing into an operational instruction. When `getModel()` returns Grok, GPT-5.1, Bedrock, or OpenAI (line 192 does not pass `usesTools`, so the default path is taken for all key configurations), the model receives instructions asserting it is Gemini 3.1 Pro, which is incorrect and could distort its behaviour.

Remove model-specific identity claims from the prompt, or make the prompt text dynamic based on the resolved provider.

🛠️ Proposed fix

```diff
-As a geospatial analyst with advanced reasoning capabilities (Gemini 3.1 Pro), your task is to analyze...
+As an expert geospatial analyst, your task is to analyze...

-1. **Abstract Pattern Recognition (ARC-AGI-2 Level):** Look for non-obvious patterns...
+1. **Abstract Pattern Recognition:** Look for non-obvious patterns...
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@lib/agents/resolution-search.tsx` around lines 137-175: the prompt in lib/agents/resolution-search.tsx contains hardcoded model branding strings ("(Gemini 3.1 Pro)" and "(ARC-AGI-2 Level)") which wrongly assert a provider identity; update the prompt generation to derive model identity and benchmarks dynamically from `getModel()` (or remove branding entirely) so non-Gemini providers (Grok, GPT-5.1, Bedrock, OpenAI) aren't mislabelled, and ensure the callsite that builds the prompt (reference `getModel()` and the prompt template in resolution-search.tsx) correctly passes provider/context flags such as `usesTools` when constructing model config so the prompt matches the actual runtime model.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@lib/agents/researcher.tsx`:
- Around line 25-31: The prompt heading currently hardcodes a vendor string
"Reasoning Approach (Gemini 3.1 Pro Enhanced):" which couples the prompt to a
specific provider; replace that literal with a model-agnostic heading such as
"Reasoning Approach:" (or "Reasoning Approach (model-agnostic):") in
lib/agents/researcher.tsx wherever that string appears, and mirror the same
change in the other file with the same issue (e.g., resolution-search.tsx);
ensure no provider brand names are embedded in prompt strings passed to the
agent so the prompts remain provider-agnostic.
In `@lib/utils/context-cache.ts`:
- Around line 88-96: clearConversationCache currently deletes keys using
key.startsWith(conversationId), causing collisions when another conversationId
begins with the same prefix; update the check in clearConversationCache to
include the separator used in cache keys (e.g., test
key.startsWith(conversationId + ':') and also allow key === conversationId) so
only the exact conversation namespace (and its children) are removed from
contextCache.
In `@next.config.mjs`:
- Line 19: The config enables reactCompiler: true which can increase build/CI
times; run a CI build with reactCompiler set to true (in next.config.mjs) to
measure impact and confirm it's acceptable, and if the build time regression is
unacceptable, revert/remove the reactCompiler: true flag or gate it behind an
environment variable (e.g., process.env.ENABLE_REACT_COMPILER) so CI can opt-in
only after validation.
In `@NEXTJS_GEMINI_RESEARCH_REPORT.md`:
- Line 75: Reference [2] in the Gemini 3.1 Pro benchmarks table cites a
non-authoritative source (vertu.com); replace that citation with the official
Google/DeepMind publication or the Gemini API/models documentation by updating
the reference entry for [2] and any in-text citations in the "Gemini 3.1 Pro"
table (the rows for ARC-AGI-2, APEX-Agents, GPQA Diamond) to point to the
official DeepMind blog post or the Gemini API docs; ensure the new reference URL
and title clearly identify Google/DeepMind as the source and remove or archive
the vertu.com link.
---
Outside diff comments:
In `@lib/agents/resolution-search.tsx`:
- Around line 137-175: The prompt in lib/agents/resolution-search.tsx contains
hardcoded model branding strings ("(Gemini 3.1 Pro)" and "(ARC-AGI-2 Level)")
which wrongly assert a provider identity; update the prompt generation to derive
model identity and benchmarks dynamically from getModel() (or remove branding
entirely) so non‑Gemini providers (Grok, GPT-5.1, Bedrock, OpenAI) aren’t
mislabelled, and ensure the callsite that builds the prompt (reference
getModel() and the prompt template in resolution-search.tsx) correctly passes
provider/context flags such as usesTools when constructing model config so the
prompt matches the actual runtime model.
In `@lib/utils/context-cache.ts`:
- Around line 1-115: The context-cache.ts file is dead and its in-memory
cachedPrefix won't reduce Gemini token billing; either remove the module or
replace/integrate it with Gemini native caching: stop using contextCache/
cachedPrefix and instead call the Gemini caching API to upload a stable prefix
and store a returned cachedContent ID (use generateCacheKey,
cacheConversationContext, getCachedContext, clearConversationCache as the
integration points), then change callers (e.g., researcher.tsx and
resolution-search.tsx) to pass the cachedContent ID to createGoogleGenerativeAI
instead of the full messages array; if you prefer to keep a local helper,
implement cacheConversationContext to call the Gemini upload endpoint and save
the cachedContent ID in contextCache (with CACHE_TTL/MAX_CACHE_SIZE semantics)
and have getCachedContext return that ID for injection into model requests.
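The "local helper" variant described in the prompt can be sketched as follows: a map that stores a server-side `cachedContent` ID per conversation, with TTL eviction, rather than the message text itself. This is a hypothetical sketch, not code from the PR; the key format `conversationId:messageCount`, the TTL value, and the function names mirror the review's description, and the caller is assumed to obtain `cachedContentId` from the Gemini caching API separately.

```typescript
type CacheEntry = { cachedContentId: string; expiresAt: number }

const CACHE_TTL_MS = 5 * 60 * 1000 // assumed 5-minute TTL
const contextCache = new Map<string, CacheEntry>()

// Store the ID returned by the Gemini caching API (not the messages themselves).
export function cacheConversationContext(
  conversationId: string,
  messageCount: number,
  cachedContentId: string,
  now: number = Date.now()
): void {
  contextCache.set(`${conversationId}:${messageCount}`, {
    cachedContentId,
    expiresAt: now + CACHE_TTL_MS,
  })
}

// Return the cached ID for injection into the model request, or undefined
// if nothing was cached or the entry has expired (expired entries are evicted).
export function getCachedContext(
  conversationId: string,
  messageCount: number,
  now: number = Date.now()
): string | undefined {
  const key = `${conversationId}:${messageCount}`
  const entry = contextCache.get(key)
  if (!entry) return undefined
  if (entry.expiresAt <= now) {
    contextCache.delete(key)
    return undefined
  }
  return entry.cachedContentId
}
```

The `now` parameter is injected only to make expiry testable; production callers would omit it.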
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: f174ce4a-9939-44ee-8631-2b90c0ab6361
⛔ Files ignored due to path filters (1)

- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (9)

- `NEXTJS_GEMINI_RESEARCH_REPORT.md`
- `lib/agents/researcher.tsx`
- `lib/agents/resolution-search.tsx`
- `lib/utils/context-cache.ts`
- `lib/utils/index.ts`
- `next.config.mjs`
- `package.json`
- `proxy.ts`
- `tsconfig.json`
📜 Review details
🔇 Additional comments (8)
`tsconfig.json` (1)

**17-17:** LGTM.

With `"noEmit": true`, `"jsx": "react-jsx"` only affects type-checking; it enables the automatic JSX transform (React 17+), which is appropriate for React 19.2. The `.next/dev/types/**/*.ts` path on line 36 is the standard Next.js 16 dev-types inclusion.

`lib/utils/index.ts` (1)
**60-61:** LGTM — `gemini-3.1-pro-preview-customtools` is a valid endpoint confirmed by official Google AI docs.

The `gemini-3.1-pro-preview-customtools` endpoint is documented for use with bash and custom tools, though Google notes "quality fluctuations in some use cases which don't benefit from such tools." Since `usesTools: true` is only passed by the `researcher` agent (which always has tool definitions registered), this conditional is well-scoped.

`proxy.ts` (1)
**4-20:** LGTM — rename is correct for Next.js 16.

`proxy.ts` replaces `middleware.ts` and now runs on the Node.js runtime; the required migration is renaming the file and the exported function to `proxy`. The logic, `config`, and matcher are unchanged and remain valid.

`next.config.mjs` (2)
**14-16:** LGTM — top-level `turbopack` is correct for Next.js 16.

`experimental.turbopack` moved out of experimental in Next.js 16; the configuration now lives at the top level of `nextConfig`. The `root: '.'` setting is appropriate given the `transpilePackages: ['QCX', 'mapbox_mcp']` entries, which indicate locally-linked packages that need to be resolved from the project root.
**19-20:** 🏗️ Heavy lift

**`cacheComponents: true` flips the entire caching model — verify all existing routes are migrated before enabling.**

`cacheComponents` is the renamed successor to `experimental.dynamicIO`, which this project did not previously enable. This is a new opt-in, not a migration of an existing setting.

Route segment config (`export const revalidate = ...`) still works without `cacheComponents`; once it is enabled, that API is deprecated and you must migrate to `"use cache"` with `cacheLife()`. Additionally, Next.js's implicit `fetch()` caching is disabled under this model — nothing is cached by default.

If any route in the codebase uses `export const revalidate`, `export const dynamic`, or relies on automatic fetch de-duplication, those paths will lose their caching behaviour silently after this change. Audit all server components, route handlers, and data-fetching functions for implicit caching assumptions before merging.

`package.json` (3)
**76-76:** Core framework version alignment looks good.

`next`, `react`, and `react-dom` are upgraded consistently for the stated Next.js 16 / React 19.2 migration. Also applies to: 82-83.

**108-109:** Type definitions are correctly synced with React runtime.

Good call updating `@types/react` and `@types/react-dom` alongside the React 19.2 bump.

**112-112:** Tooling updates match the migration direction.

React Compiler and Next ESLint config updates are consistent with the framework upgrade. Also applies to: 115-115.
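The `revalidate`-to-`"use cache"` migration required by the `cacheComponents` comment above can be sketched as follows. This is a hypothetical fragment, not code from the PR: the function name and URL are invented, the import alias follows the `unstable_cacheLife` naming used while the API was pre-stable (the exact export name may differ by Next.js version), and `'hours'` is one of the built-in cache-life profiles.

```typescript
// Before (deprecated once cacheComponents is enabled):
// export const revalidate = 3600

import { unstable_cacheLife as cacheLife } from 'next/cache'

export async function getRegions() {
  'use cache'
  // Roughly the replacement for `revalidate = 3600` under the new model.
  cacheLife('hours')
  const res = await fetch('https://example.com/api/regions')
  return res.json()
}
```

Under `cacheComponents`, any data function without a `"use cache"` directive is uncached by default, so each such call site needs an explicit decision.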
```text
**Reasoning Approach (Gemini 3.1 Pro Enhanced):**
Before using any tool, break down complex queries into logical steps:
1. Understand the user's intent and constraints
2. Identify which tool(s) can best address each part
3. Explain your reasoning when selecting between multiple tools
4. Execute tools in the optimal order
```
Same model-identity coupling as in resolution-search.tsx — remove the Gemini brand from the prompt heading.
Line 25's "Reasoning Approach (Gemini 3.1 Pro Enhanced):" heading will be received verbatim by Grok, GPT-5.1, or Bedrock Claude whenever those providers are active. The instruction itself is sound; the brand attribution is the problem.
🛠️ Proposed fix

```diff
-**Reasoning Approach (Gemini 3.1 Pro Enhanced):**
+**Reasoning Approach:**
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```text
**Reasoning Approach:**
Before using any tool, break down complex queries into logical steps:
1. Understand the user's intent and constraints
2. Identify which tool(s) can best address each part
3. Explain your reasoning when selecting between multiple tools
4. Execute tools in the optimal order
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@lib/agents/researcher.tsx` around lines 25 - 31, The prompt heading currently
hardcodes a vendor string "Reasoning Approach (Gemini 3.1 Pro Enhanced):" which
couples the prompt to a specific provider; replace that literal with a
model-agnostic heading such as "Reasoning Approach:" (or "Reasoning Approach
(model-agnostic):") in lib/agents/researcher.tsx wherever that string appears,
and mirror the same change in the other file with the same issue (e.g.,
resolution-search.tsx); ensure no provider brand names are embedded in prompt
strings passed to the agent so the prompts remain provider-agnostic.
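One way to honour this prompt is to build the reasoning block once, with no vendor names, and treat the provider purely as metadata. This is a hypothetical sketch; `Provider`, `buildSystemPrompt`, and the provider union values are illustrative names, not identifiers from the PR.

```typescript
type Provider = 'google' | 'openai' | 'xai' | 'bedrock'

// Provider-agnostic instruction text: no brand names embedded in the prompt.
const REASONING_BLOCK = [
  '**Reasoning Approach:**',
  'Before using any tool, break down complex queries into logical steps:',
  "1. Understand the user's intent and constraints",
  '2. Identify which tool(s) can best address each part',
  '3. Explain your reasoning when selecting between multiple tools',
  '4. Execute tools in the optimal order',
].join('\n')

export function buildSystemPrompt(_provider: Provider): string {
  // The provider is accepted only so callers can log/telemetry it;
  // it never leaks into the instruction text.
  return REASONING_BLOCK
}
```

With this shape, switching `getModel()` between providers can never ship a mismatched identity claim.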
```typescript
export function clearConversationCache(conversationId: string): void {
  const keysToDelete: string[] = [];
  for (const [key] of contextCache.entries()) {
    if (key.startsWith(conversationId)) {
      keysToDelete.push(key);
    }
  }
  keysToDelete.forEach(key => contextCache.delete(key));
}
```
clearConversationCache has a prefix-collision bug.
key.startsWith(conversationId) will incorrectly delete cache entries belonging to a different conversation whose ID begins with the same substring (e.g., clearing "abc" also clears "abcdef:5"). The separator should be included in the prefix test.
🐛 Proposed fix

```diff
-    if (key.startsWith(conversationId)) {
+    if (key.startsWith(conversationId + ':')) {
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@lib/utils/context-cache.ts` around lines 88 - 96, clearConversationCache
currently deletes keys using key.startsWith(conversationId), causing collisions
when another conversationId begins with the same prefix; update the check in
clearConversationCache to include the separator used in cache keys (e.g., test
key.startsWith(conversationId + ':') and also allow key === conversationId) so
only the exact conversation namespace (and its children) are removed from
contextCache.
```diff
   },

+  // reactCompiler and cacheComponents moved to top level in v16
+  reactCompiler: true,
```
🧹 Nitpick | 🔵 Trivial | ⚡ Quick win
reactCompiler: true increases build times — test in CI before shipping.
The reactCompiler option has been promoted from experimental to stable in Next.js 16 and is not enabled by default while the team gathers build performance data; enabling it increases compile times in development and builds because the React Compiler relies on Babel.
This is a non-breaking improvement, but confirm the build time increase is acceptable in your CI pipeline.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@next.config.mjs` at line 19, The config enables reactCompiler: true which can
increase build/CI times; run a CI build with reactCompiler set to true (in
next.config.mjs) to measure impact and confirm it's acceptable, and if the build
time regression is unacceptable, revert/remove the reactCompiler: true flag or
gate it behind an environment variable (e.g., process.env.ENABLE_REACT_COMPILER)
so CI can opt-in only after validation.
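The env-gated variant suggested in the prompt can be sketched as below. This is hypothetical: `ENABLE_REACT_COMPILER` is an assumed variable name, and the `declare` line only exists to keep the snippet self-contained (a real `next.config.ts` has Node's `process` global available without it).

```typescript
// Self-containment shim; erased at compile time, the real Node global is used.
declare const process: { env: Record<string, string | undefined> }

const nextConfig = {
  // Opt in only once CI has measured the build-time cost of the React Compiler.
  reactCompiler: process.env.ENABLE_REACT_COMPILER === '1',
}

export default nextConfig
```

This keeps default builds fast while letting a single CI job validate the compiler before it is enabled everywhere.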
```diff
+## References
+[1] [Upgrading: Version 16 | Next.js](https://nextjs.org/docs/app/guides/upgrading/version-16)
+[2] [Gemini 3.1 Pro: #1 Reasoning AI Benchmarks & API Guide | Google DeepMind](https://vertu.com/ai-tools/gemini-3-1-pro-benchmarks-api-specs-developer-guide-in-2026/)
```
Reference [2] uses a non-authoritative source for benchmark claims.
vertu.com is a luxury smartphone brand, not an official Google or DeepMind publication. All benchmark figures cited in the Gemini 3.1 Pro table (ARC-AGI-2, APEX-Agents, GPQA Diamond) trace back to this single non-authoritative reference. Replace with the official Google DeepMind blog post or the Gemini API models documentation.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@NEXTJS_GEMINI_RESEARCH_REPORT.md` at line 75, Reference [2] in the Gemini 3.1
Pro benchmarks table cites a non-authoritative source (vertu.com); replace that
citation with the official Google/DeepMind publication or the Gemini API/models
documentation by updating the reference entry for [2] and any in-text citations
in the "Gemini 3.1 Pro" table (the rows for ARC-AGI-2, APEX-Agents, GPQA
Diamond) to point to the official DeepMind blog post or the Gemini API docs;
ensure the new reference URL and title clearly identify Google/DeepMind as the
source and remove or archive the vertu.com link.
@coderabbitai I've analyzed the build errors. Here are my questions to ensure I'm aligned with your expectations:

**Dynamic Rendering:** The routes `/api/embeddings`, `/api/chats`, and the root `/` page are failing during prerendering due to dynamic dependencies. I plan to mark them with `export const dynamic = 'force-dynamic'`. Is it acceptable for these to be dynamic, or do you require static generation for any of them?
**1. Dynamic Rendering**

| Issue | Recommended Fix |
|---|---|
| `/api/embeddings`, `/api/chats` prerender failure | Add `export const dynamic = 'force-dynamic'` |
| Local path in embeddings route | Guard with `fs.existsSync` + env variable + graceful 503 |
| UUID on root `/` page | Generate client-side via `useEffect` + `crypto.randomUUID()` in a Client Component |
| Turbopack filesystem warning | Lazy-import `fs` inside the handler body |
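The first row of the table can be sketched as a route handler. This is a hypothetical `app/api/chats/route.ts`; the handler body and response payload are invented for illustration, and only the `export const dynamic` line is the actual fix under discussion.

```typescript
// Opt this route out of prerendering: it is evaluated per request.
export const dynamic = 'force-dynamic'

export async function GET(): Promise<Response> {
  // Placeholder body; the real handler would read dynamic request state.
  return Response.json({ ok: true })
}
```

`Response.json` is available as a static method on the Fetch API `Response` in Node 18+, which Next.js route handlers rely on.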
This PR upgrades the core framework to Next.js 16 and React 19.2, enables the React Compiler, and enhances the Gemini 3.1 Pro integration with the customtools endpoint and context caching.